Delphi’s COVIDcast Project:
An Ecosystem for Tracking and Forecasting the Pandemic

Ryan Tibshirani
Statistics and Machine Learning
Carnegie Mellon University
Amazon Scholar, AWS Labs




November 3, 2020

Delphi Then

Delphi Now

COVIDcast Ecosystem

This Talk

I can’t cover all of this! I’ll focus on our data sources, our API, and some basic demos (“real” modeling work will have to be for a future talk …)

Outline:

  1. Data sources
  2. API and client support
  3. Dive into symptom surveys

Reproducible talk: all code included
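Later code chunks call functions from several packages without loading them explicitly (e.g., map_dfr() from purrr, grid.arrange() from gridExtra). A setup chunk along these lines is assumed to have run first:

```r
library(covidcast)   # Delphi's API client: covidcast_signal(), plot methods,
                     # and geo utilities like name_to_fips()
library(dplyr)       # filter(), mutate(), group_by(), ...
library(tidyr)       # drop_na(), pivot_longer()
library(purrr)       # map_dfr()
library(ggplot2)     # ggplot() graphics
library(gridExtra)   # grid.arrange()
```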

Part 1: Data Sources

Severity Pyramid

What Is This Used For?

COVIDcast Indicators

COVIDcast Indicators (Cont.)

Example: Deaths

How many people have died from COVID-19 per day, in my state, since March 1?

library(covidcast)
start_day = "2020-03-01"
end_day = "2020-10-28"
deaths = covidcast_signal(data_source = "usa-facts", 
                          signal = "deaths_7dav_incidence_num", 
                          start_day = start_day, end_day = end_day,
                          geo_type = "state", geo_values = "pa")

plot(deaths, plot_type = "line", 
     title = "New COVID-19 deaths in PA (7-day average)")

Example: Hospitalizations

What percentage of daily hospital admissions are due to COVID-19 in PA, NY, TX?

hosp = covidcast_signal(data_source = "hospital-admissions", 
                        signal = "smoothed_adj_covid19",
                        start_day = start_day, end_day = end_day,
                        geo_type = "state", geo_values = c("pa", "ny", "tx"))

plot(hosp, plot_type = "line", 
     title = "% of hospital admissions due to COVID-19")

Example: Cases

What does the current COVID-19 incident case rate look like, nationwide?

cases = covidcast_signal(data_source = "usa-facts", 
                         signal = "confirmed_7dav_incidence_prop",
                         start_day = end_day, end_day = end_day)

plot(cases, title = "New COVID-19 cases per 100,000 people")

Example: Cases (Cont.)

What does the current COVID-19 cumulative case rate look like, nationwide?

cases = covidcast_signal(data_source = "usa-facts", 
                         signal = "confirmed_cumulative_prop",
                         start_day = end_day, end_day = end_day)

plot(cases, title = "Cumulative COVID-19 cases per 100,000 people", 
     choro_params = list(legend_n = 6))

Example: Cases (Cont.)

Where is the current COVID-19 cumulative case rate greater than 2%?

plot(cases, choro_col = c("#D3D3D3", "#FFC0CB"), 
     title = "Cumulative COVID-19 cases per 100,000 people",
     choro_params = list(breaks = c(0, 2000), legend_width = 3))
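The same question can also be answered in tabular form; here is a sketch with dplyr (assuming fips_to_name() accepts a vector of county codes, as the covidcast package's geo utilities generally do):

```r
# Counties at or above 2,000 cumulative cases per 100,000 people (i.e., 2%)
cases %>%
  filter(value >= 2000) %>%
  transmute(county = fips_to_name(geo_value), rate = value) %>%
  arrange(desc(rate)) %>%
  head()
```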

Example: Doctor’s Visits

How do some cities compare in terms of doctor’s visits due to COVID-like illness?

dv = covidcast_signal(data_source = "doctor-visits", 
                      signal = "smoothed_adj_cli", 
                      start_day = start_day, end_day = end_day,
                      geo_type = "msa", 
                      geo_values = name_to_cbsa(c("Pittsburgh", "New York", 
                                                  "San Antonio", "Miami")))

plot(dv, plot_type = "line", 
     title = "% of doctor's visits due to COVID-like illness")

Example: Symptoms

How do my county and my friend’s county compare in terms of COVID symptoms?

sympt = covidcast_signal(data_source = "fb-survey", 
                         signal = "smoothed_hh_cmnty_cli",
                         start_day = "2020-04-15", end_day = end_day,
                         geo_values = c(name_to_fips("Allegheny"),
                                        name_to_fips("Fulton", state = "GA")))

plot(sympt, plot_type = "line", range = range(sympt$value),
     title = "% of people who know somebody with COVID symptoms")

Part 2: API and Client Support

COVIDcast API

The COVIDcast API is based on HTTP GET queries and returns data in JSON form. The base URL is https://api.covidcast.cmu.edu/epidata/api.php?endpoint=covidcast


Parameter     Description                              Examples
------------  ---------------------------------------  ---------------------------------
data_source   data source                              doctor-visits or fb-survey
signal        signal derived from the data source      smoothed_cli or smoothed_adj_cli
time_type     temporal resolution of the signal        day or week
geo_type      spatial resolution of the signal         county, hrr, msa, or state
time_values   time units over which events happened    20200406 or 20200406-20200410
geo_value     location codes, depending on geo_type    * for all, or pa for Pennsylvania

Example: API Query

Estimated % COVID-like illness on April 6, 2020 from the Facebook survey, in Allegheny County: https://api.covidcast.cmu.edu/epidata/api.php?endpoint=covidcast&data_source=fb-survey&signal=raw_cli&time_type=day&geo_type=county&time_values=20200406&geo_value=42003

library(jsonlite)
res = readLines("https://api.covidcast.cmu.edu/epidata/api.php?endpoint=covidcast&data_source=fb-survey&signal=raw_cli&time_type=day&geo_type=county&time_values=20200406&geo_value=42003")
prettify(res)
## {
##     "result": 1,
##     "epidata": [
##         {
##             "geo_value": "42003",
##             "signal": "raw_cli",
##             "time_value": 20200406,
##             "direction": null,
##             "issue": 20200903,
##             "lag": 150,
##             "value": 0.7614984,
##             "stderr": 0.3826746,
##             "sample_size": 434.8891
##         }
##     ],
##     "message": "success"
## }
## 
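To work with the response as a data frame rather than raw JSON, jsonlite's fromJSON() can simplify the epidata array directly (a sketch using the same query response as above):

```r
# fromJSON() simplifies the "epidata" JSON array into a data frame
res_parsed = fromJSON(res)
epidata = res_parsed$epidata
epidata$value  # the estimated % CLI on April 6, 2020
```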

API Documentation

For full details, see the API documentation site. There you’ll also find details on:

As Of, Issues, Lag

By default the API returns the most recent data for each time_value. We also provide access to all previous versions of the data, using the following optional parameters:


Parameter  To get data ...                                                 Examples
---------  --------------------------------------------------------------  -----------------------------
as_of      as if we had queried the API on a particular date               20200406
issues     published on a particular date or in a date range               20200406 or 20200406-20200410
lag        published a certain number of time units after events occurred  1 or 3
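For example, assuming the R client (introduced below) passes these through as arguments, just as it does for as_of, version-aware queries might look like the following sketch (argument names assumed to mirror the API parameters):

```r
# All versions of the mid-August doctor-visits signal published between
# 2020-08-16 and 2020-10-01
dv_versions = covidcast_signal(data_source = "doctor-visits",
                               signal = "smoothed_adj_cli",
                               start_day = "2020-08-15", end_day = "2020-08-31",
                               geo_type = "state", geo_values = "ca",
                               issues = c("2020-08-16", "2020-10-01"))

# Values as first published exactly 5 days after the events occurred
dv_lag5 = covidcast_signal(data_source = "doctor-visits",
                           signal = "smoothed_adj_cli",
                           start_day = "2020-08-15", end_day = "2020-08-31",
                           geo_type = "state", geo_values = "ca", lag = 5)
```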

Data Revisions

Why would we need this? Because many data sources are subject to revisions:

This presents a challenge to modelers: e.g., we have to learn how to forecast based on the data we’d have at the time, not updates that would arrive later

To accommodate this, we log revisions even when the original data source does not!

covidcast R Package

We also provide an R package called covidcast for API access. Highlights:

(Have an idea? File an issue or contribute a PR on our public GitHub repo)

Example: Backfill in Doctor’s Visits

The last two weeks of August in CA …

# Let's get the data that was available as of 09/21, for the end of August in CA
dv = covidcast_signal(data_source = "doctor-visits", 
                      signal = "smoothed_adj_cli",
                      start_day = "2020-08-15", end_day = "2020-08-31",
                      geo_type = "state", geo_values = "ca",
                      as_of = "2020-09-21")

# Plot the time series curve
xlim = c(as.Date("2020-08-15"), as.Date("2020-09-21"))
ylim = c(3.83, 5.92)
ggplot(dv, aes(x = time_value, y = value)) + 
  geom_line() +
  coord_cartesian(xlim = xlim, ylim = ylim) +
  geom_vline(aes(xintercept = as.Date("2020-09-21")), lty = 2) +
  labs(color = "as of", x = "Date", y = "% doctor's visits due to CLI in CA") +
  theme_bw() + theme(legend.position = "bottom")

Example: Backfill in Doctor’s Visits (Cont.)

The last two weeks of August in CA …

# Now loop over a bunch of "as of" dates, fetch data from the API for each one
as_ofs = seq(as.Date("2020-09-01"), as.Date("2020-09-21"), by = "week")
dv_as_of = map_dfr(as_ofs, function(as_of) {
  covidcast_signal(data_source = "doctor-visits", signal = "smoothed_adj_cli",
                   start_day = "2020-08-15", end_day = "2020-08-31", 
                   geo_type = "state", geo_values = "ca", as_of = as_of)
})

# Plot the time series curve "as of" September 1
dv_as_of %>% 
  filter(issue == as.Date("2020-09-01")) %>% 
  ggplot(aes(x = time_value, y = value)) + 
  geom_line(aes(color = factor(issue))) + 
  geom_vline(aes(color = factor(issue), xintercept = issue), lty = 2) +
  coord_cartesian(xlim = xlim, ylim = ylim) +
  labs(color = "as of", x = "Date", y = "% doctor's visits due to CLI in CA") +
  geom_line(data = dv, aes(x = time_value, y = value)) +
  geom_vline(aes(xintercept = as.Date("2020-09-21")), lty = 2) +
  theme_bw() + theme(legend.position = "none")

Example: Backfill in Doctor’s Visits (Cont.)

The last two weeks of August in CA …

dv_as_of %>% 
  ggplot(aes(x = time_value, y = value)) + 
  geom_line(aes(color = factor(issue))) + 
  geom_vline(aes(color = factor(issue), xintercept = issue), lty = 2) +
  coord_cartesian(xlim = xlim, ylim = ylim) +
  labs(color = "as of", x = "Date", y = "% doctor's visits due to CLI in CA") +
  geom_line(data = dv, aes(x = time_value, y = value)) +
  geom_vline(aes(xintercept = as.Date("2020-09-21")), lty = 2) +
  theme_bw() + theme(legend.position = "none")

Part 3: Dive Into Symptom Surveys

Massive Symptom Survey

Through a recruitment partnership with Facebook, we survey about 75,000 people in the United States daily (over 10 million responses since the survey began in April) about:

A parallel, international effort by the University of Maryland reaches 100+ countries in 55 languages; over 20 million responses so far

Massive Symptom Survey (Cont.)

This is the largest non-Census research survey ever conducted (that we know of)

COVID-Like Illness

Using the survey data we generate daily, county-level estimates of:

(Note that COVID-like illness, or CLI, is defined as fever of at least 100 °F along with cough, shortness of breath, or difficulty breathing. We also ask people to report on rarer symptoms)

Why % CLI-in-Community?

Why ask a proxy question (having people report on others)? Here are Spearman correlations to COVID-19 case rates, sliced by time:

# Fetch Facebook % CLI signal, % CLI-in-community signal and confirmed case
# incidence proportions
start_day = "2020-04-15"
end_day = "2020-10-28"
sympt1 = covidcast_signal("fb-survey", "smoothed_cli", 
                          start_day, end_day)
sympt2 = covidcast_signal("fb-survey", "smoothed_hh_cmnty_cli", 
                          start_day, end_day)
cases = covidcast_signal("usa-facts", "confirmed_7dav_incidence_prop", 
                         start_day, end_day)

# Consider only counties with at least 500 cumulative cases so far
case_num = 500
geo_values = covidcast_signal("usa-facts", "confirmed_cumulative_num",
                              max(cases$time_value), max(cases$time_value)) %>%
  filter(value >= case_num) %>% pull(geo_value)
sympt1_act = sympt1 %>% filter(geo_value %in% geo_values)
sympt2_act = sympt2 %>% filter(geo_value %in% geo_values)
cases_act = cases %>% filter(geo_value %in% geo_values)

# Compute correlations, per time, over all counties
df_cor1 = covidcast_cor(sympt1_act, cases_act, by = "time_value", 
                        method = "spearman")
df_cor2 = covidcast_cor(sympt2_act, cases_act, by = "time_value", 
                        method = "spearman")

# Stack rowwise into one data frame
df_cor = rbind(df_cor1, df_cor2)
df_cor$signal = c(rep("% CLI", nrow(df_cor1)), 
                  rep("% CLI-in-community", nrow(df_cor2)))

# Then plot correlations over time 
ggplot_colors = c("#FC4E07", "#00AFBB", "#E7B800")
ggplot(df_cor, aes(x = time_value, y = value)) + 
  geom_line(aes(color = signal)) +
  scale_color_manual(values = ggplot_colors[c(3,1)]) +
  labs(title = "Correlation between survey signals and case rates (by time)",
       subtitle = sprintf("Over all counties with at least %i cumulative cases",
                          case_num), x = "Date", y = "Correlation") +
  theme_bw() + theme(legend.position = "bottom", legend.title = element_blank())

Why % CLI-in-Community? (Cont.)

Now here are Spearman correlations to COVID-19 case rates, sliced by county:

# Compute correlations, per county, over all time
df_cor1 = covidcast_cor(sympt1_act, cases_act, by = "geo_value", 
                        method = "spearman")
df_cor2 = covidcast_cor(sympt2_act, cases_act, by = "geo_value", 
                        method = "spearman")

# Stack rowwise into one data frame
df_cor = rbind(df_cor1, df_cor2)
df_cor$signal = c(rep("% CLI", nrow(df_cor1)), 
                  rep("% CLI-in-community", nrow(df_cor2)))

# Then plot correlations as densities
ggplot(df_cor, aes(value)) + geom_density(aes(color = signal, fill = signal), 
                                          alpha = 0.4) +
  scale_color_manual(values = ggplot_colors[c(3,1)]) +
  scale_fill_manual(values = ggplot_colors[c(3,1)]) +
  labs(title = "Correlation between survey signals and case rates (by county)",
       subtitle = sprintf("Over all counties with at least %i cumulative cases",
                          case_num), x = "Correlation", y = "Density") +
  theme_bw() + theme(legend.position = "bottom", legend.title = element_blank())

An Early Indicator?

Let’s take a look at case counts in Miami-Dade from June 1 to July 15, and compare them to the % CLI-in-community indicator based on our survey:

# Fetch Facebook % CLI-in-community signal and confirmed case incidence numbers
# from June 1 to July 15
start_day = "2020-06-01"
end_day = "2020-07-15"
sympt = covidcast_signal("fb-survey", "smoothed_hh_cmnty_cli", 
                         start_day, end_day)
cases = covidcast_signal("usa-facts", "confirmed_7dav_incidence_num",
                         start_day, end_day)

# Function to transform from one range to another
trans = function(x, from_range, to_range) {
  (x - from_range[1]) / (from_range[2] - from_range[1]) *
    (to_range[2] - to_range[1]) + to_range[1]
}

# Function to produce a plot comparing the signals for one county
plot_one = function(geo_value, title = NULL, xlab = NULL,
                    ylab1 = NULL, ylab2 = NULL, legend =  TRUE) {
  # Filter down the signal data frames
  given_geo_value = geo_value
  sympt_one = sympt %>% filter(geo_value == given_geo_value)
  cases_one = cases %>% filter(geo_value == given_geo_value)
  
  # Compute ranges of the two signals
  range1 = cases_one %>% select("value") %>% range
  range2 = sympt_one %>% select("value") %>% range
  
  # Convenience functions for our two signal ranges
  trans12 = function(x) trans(x, range1, range2)
  trans21 = function(x) trans(x, range2, range1)

  # Find state name, find abbreviation, then set title
  state_name = fips_to_name(paste0(substr(geo_value, 1, 2), "000"))
  state_abbr = name_to_abbr(state_name)
  title = paste0(fips_to_name(geo_value), ", ", state_abbr)

  # Transform the combined signal to the incidence range, then stack
  # these rowwise into one data frame
  df = select(rbind(sympt_one %>% mutate_at("value", trans21),
                    cases_one), c("time_value", "value"))
  df$signal = c(rep("% CLI-in-community", nrow(sympt_one)),
                rep("New COVID-19 cases", nrow(cases_one)))
  
  # Finally, plot both signals
  pos = ifelse(legend, "bottom", "none")
  return(ggplot(df, aes(x = time_value, y = value)) +
           geom_line(aes(color = signal)) +
           scale_color_manual(values = ggplot_colors[1:2]) +
           scale_y_continuous(name = ylab1, limits = range1,
                              sec.axis = sec_axis(trans = trans12,
                                                  name = ylab2)) +
           labs(title = title, x = xlab) + theme_bw() +
           theme(legend.position = pos, legend.title = element_blank()))
}

# Produce a plot for Miami-Dade, and add vertical lines
plot_one(name_to_fips("Miami-Dade"), xlab = "Date",
         ylab1 = "New COVID-19 cases",
         ylab2 = "% of people who know someone with CLI") +
  geom_vline(xintercept = as.numeric(as.Date("2020-06-19")),
             linetype = 2, size = 1, color = ggplot_colors[1]) +
  geom_vline(xintercept = as.numeric(as.Date("2020-06-25")),
             linetype = 2, size = 1, color = ggplot_colors[2])

An Early Indicator? (Cont.)

OK, that was just one county… let’s look at the top 20 counties in terms of the rise in case counts:

num = 20
geo_values = cases %>% group_by(geo_value) %>%
  summarize(diff = last(value) - first(value)) %>%
  arrange(desc(diff)) %>% head(num) %>% pull(geo_value)

p_list = vector("list", num)
for (i in 1:num) {
  p_list[[i]] = plot_one(geo_values[i], legend = FALSE)
}
do.call(grid.arrange, c(p_list, nrow = 5, ncol = 4))

Simple Forecasting Demo

Notational setup: for location (county) \(\ell\) and time (day) \(t\), let \(Y_{\ell,t}\) denote the COVID-19 case incidence rate and \(F_{\ell,t}\) the Facebook % CLI-in-community signal, and let \(h\) denote a transformation applied to each signal

To predict case rates \(d\) days ahead, consider two simple models: \[ \begin{align*} & h(Y_{\ell,t+d}) \approx \alpha + \sum_{j=0}^2 \beta_j h(Y_{\ell,t-7j}) \quad \text{(Cases)} \\ & h(Y_{\ell,t+d}) \approx \alpha + \sum_{j=0}^2 \beta_j h(Y_{\ell,t-7j}) + \sum_{j=0}^2 \gamma_j h(F_{\ell,t-7j}) \quad \text{(Cases + Facebook)} \\ \end{align*} \]

For each forecast date, we train the models on the most recent 14 days’ worth of data
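As a sketch (not the code behind the results on the next slide), the “Cases” model can be fit with lm() on lagged, transformed case rates. Here dat is a hypothetical data frame of per-county daily case rates, d and forecast_date are assumed to be set, and h is taken to be log(1 + x) purely for illustration:

```r
h = function(x) log(1 + x)

# Build lagged features and the d-days-ahead target, per county
dat_lagged = dat %>%
  group_by(geo_value) %>%
  arrange(time_value) %>%
  mutate(lag0 = h(value),
         lag7 = h(lag(value, 7)),
         lag14 = h(lag(value, 14)),
         target = h(lead(value, d))) %>%
  ungroup()

# Train on the most recent 14 days of data before the forecast date
train = dat_lagged %>%
  filter(time_value > forecast_date - 14, time_value <= forecast_date)
fit = lm(target ~ lag0 + lag7 + lag14, data = train)
```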

Simple Forecasting Demo (Cont.)

Results from forecasts made over early May to late August (for details, read this blog post):

# This RData file was downloaded from
# https://github.com/cmu-delphi/delphi-blog/tree/main/content/post/forecast-demo;
# the code for generating it is also there
load("demo-extended.rda")

# Compute and plot median errors as function of number of days ahead
err_by_lead = res %>%
  select(-c(err3, err4)) %>%
  drop_na() %>%                                       # Restrict to common time
  mutate(err1 = err1 / err0, err2 = err2 / err0) %>%  # Compute relative error
                                                      # to strawman model
  ungroup() %>%
  select(-err0) %>%
  pivot_longer(names_to = "model", values_to = "err",
               cols = -c(geo_value, time_value, lead)) %>%
  mutate(model = factor(model, labels = c("Cases", "Cases + Facebook"))) %>%
  group_by(model, lead) %>%
  summarize(err = median(err)) %>% 
  ungroup()

ggplot(err_by_lead, aes(x = lead, y = err)) + 
  geom_line(aes(color = model)) + 
  geom_point(aes(color = model)) + 
  geom_hline(yintercept = err_by_lead %>% 
               filter(lead %in% 7, model == "Cases") %>% pull(err),
             linetype = 2, color = "gray") +
  scale_color_manual(values = c("black", ggplot_colors[1])) +
  labs(title = "Forecasting errors by number of days ahead",
       subtitle = sprintf("Over all counties with at least %i cumulative cases",
                          case_num),
       x = "Number of days ahead", y = "Median scaled error") +
  theme_bw() + theme(legend.position = "bottom", legend.title = element_blank())

Latest Data Streams

Latest revision of the survey allows us to calculate new aggregates:

Mask Wearing

What did mask wearing look like as of mid-October? And how does it compare to % CLI-in-community?

day = "2020-10-15"
mask = covidcast_signal("fb-survey", "smoothed_wwearing_mask",
                        start_day = day, end_day = day, geo_type = "state")
sympt = covidcast_signal("fb-survey", "smoothed_whh_cmnty_cli",
                         start_day = day, end_day = day, geo_type = "state")

p1 = plot(mask, title = "% wearing masks in public most or all the time",
          range = c(55, 100), choro_col = c("#D9F0C2", "#BFE6B5", "#1F589F")) 
p2 = plot(sympt, title = "% who know someone who is sick", range = c(5, 40)) 
grid.arrange(p1, p2, nrow = 1)

Mask Wearing (Cont.)

Another look …

joined_data = inner_join(mask, sympt, by = "geo_value", 
                          suffix = c(".mask", ".cli"))
ggplot(joined_data, aes(x = value.mask, y = value.cli, 
                        label = toupper(geo_value))) +
  geom_text(size = 4, check_overlap = TRUE) +
  geom_smooth(method = "lm", se = FALSE, col = ggplot_colors[1]) +
  labs(x = "% wearing masks most/all the time in public",
       y = "% who know someone who is sick",
       title = "Mask usage vs. CLI-in-community, by state") +
  theme_bw()

Wrapping Up

Delphi’s COVIDcast ecosystem has many parts:

  1. Unique relationships with partners in tech and healthcare granting us access to data on pandemic activity
  2. Code and infrastructure to build COVID-19 indicators, continuously updated and geographically comprehensive
  3. A historical database of all indicators, including revision tracking
  4. A public API (and R and Python packages) serving new indicators daily
  5. Interactive maps and graphics to display our indicators
  6. Forecasting and modeling work building on the indicators

In this pandemic, it’ll take an entire community to find answers to all the important questions. Please join ours!

Thanks


Delphi Carnegie Mellon University

Appendix

Current Metadata

meta = covidcast_meta()
summary(meta)
## A `covidcast_meta` data frame with 362 rows and 15 columns.
## 
## Number of data sources : 11
## Number of signals      : 98
## 
## Summary:
## 
##  data_source           signal                           county msa hrr state
##  doctor-visits         smoothed_adj_cli                 *      *   *   *    
##  doctor-visits         smoothed_cli                     *      *   *   *    
##  fb-survey             raw_cli                          *      *   *   *    
##  fb-survey             raw_hh_cmnty_cli                 *      *   *   *    
##  fb-survey             raw_ili                          *      *   *   *    
##  fb-survey             raw_nohh_cmnty_cli               *      *   *   *    
##  fb-survey             raw_wcli                         *      *   *   *    
##  fb-survey             raw_whh_cmnty_cli                *      *   *   *    
##  fb-survey             raw_wili                         *      *   *   *    
##  fb-survey             raw_wnohh_cmnty_cli              *      *   *   *    
##  fb-survey             smoothed_cli                     *      *   *   *    
##  fb-survey             smoothed_hh_cmnty_cli            *      *   *   *    
##  fb-survey             smoothed_ili                     *      *   *   *    
##  fb-survey             smoothed_nohh_cmnty_cli          *      *   *   *    
##  fb-survey             smoothed_tested_14d              *      *   *   *    
##  fb-survey             smoothed_tested_positive_14d     *      *   *   *    
##  fb-survey             smoothed_wanted_test_14d         *      *   *   *    
##  fb-survey             smoothed_wcli                    *      *   *   *    
##  fb-survey             smoothed_wearing_mask            *      *   *   *    
##  fb-survey             smoothed_whh_cmnty_cli           *      *   *   *    
##  fb-survey             smoothed_wili                    *      *   *   *    
##  fb-survey             smoothed_wnohh_cmnty_cli         *      *   *   *    
##  fb-survey             smoothed_wtested_14d             *      *   *   *    
##  fb-survey             smoothed_wtested_positive_14d    *      *   *   *    
##  fb-survey             smoothed_wwanted_test_14d        *      *   *   *    
##  fb-survey             smoothed_wwearing_mask           *      *   *   *    
##  ght                   raw_search                              *   *   *    
##  ght                   smoothed_search                         *   *   *    
##  google-survey         raw_cli                          *      *   *   *    
##  google-survey         smoothed_cli                     *      *   *   *    
##  hospital-admissions   smoothed_adj_covid19             *      *   *   *    
##  hospital-admissions   smoothed_adj_covid19_from_claims *      *   *   *    
##  hospital-admissions   smoothed_covid19                 *      *   *   *    
##  hospital-admissions   smoothed_covid19_from_claims     *      *   *   *    
##  indicator-combination confirmed_7dav_cumulative_num    *      *   *   *    
##  indicator-combination confirmed_7dav_cumulative_prop   *      *   *   *    
##  indicator-combination confirmed_7dav_incidence_num     *      *   *   *    
##  indicator-combination confirmed_7dav_incidence_prop    *      *   *   *    
##  indicator-combination confirmed_cumulative_num         *      *   *   *    
##  indicator-combination confirmed_cumulative_prop        *      *   *   *    
##  indicator-combination confirmed_incidence_num          *      *   *   *    
##  indicator-combination confirmed_incidence_prop         *      *   *   *    
##  indicator-combination deaths_7dav_cumulative_num       *      *   *   *    
##  indicator-combination deaths_7dav_cumulative_prop      *      *   *   *    
##  indicator-combination deaths_7dav_incidence_num        *      *   *   *    
##  indicator-combination deaths_7dav_incidence_prop       *      *   *   *    
##  indicator-combination deaths_cumulative_num            *      *   *   *    
##  indicator-combination deaths_cumulative_prop           *      *   *   *    
##  indicator-combination deaths_incidence_num             *      *   *   *    
##  indicator-combination deaths_incidence_prop            *      *   *   *    
##  indicator-combination nmf_day_doc_fbc_fbs_ght          *      *       *    
##  indicator-combination nmf_day_doc_fbs_ght              *      *       *    
##  jhu-csse              confirmed_7dav_cumulative_num    *      *   *   *    
##  jhu-csse              confirmed_7dav_cumulative_prop   *      *   *   *    
##  jhu-csse              confirmed_7dav_incidence_num     *      *   *   *    
##  jhu-csse              confirmed_7dav_incidence_prop    *      *   *   *    
##  jhu-csse              confirmed_cumulative_num         *      *   *   *    
##  jhu-csse              confirmed_cumulative_prop        *      *   *   *    
##  jhu-csse              confirmed_incidence_num          *      *   *   *    
##  jhu-csse              confirmed_incidence_prop         *      *   *   *    
##  jhu-csse              deaths_7dav_cumulative_num       *      *   *   *    
##  jhu-csse              deaths_7dav_cumulative_prop      *      *   *   *    
##  jhu-csse              deaths_7dav_incidence_num        *      *   *   *    
##  jhu-csse              deaths_7dav_incidence_prop       *      *   *   *    
##  jhu-csse              deaths_cumulative_num            *      *   *   *    
##  jhu-csse              deaths_cumulative_prop           *      *   *   *    
##  jhu-csse              deaths_incidence_num             *      *   *   *    
##  jhu-csse              deaths_incidence_prop            *      *   *   *    
##  quidel                covid_ag_raw_pct_positive        *      *   *   *    
##  quidel                covid_ag_smoothed_pct_positive   *      *   *   *    
##  quidel                raw_pct_negative                        *       *    
##  quidel                raw_tests_per_device                    *       *    
##  quidel                smoothed_pct_negative                   *       *    
##  quidel                smoothed_tests_per_device               *       *    
##  safegraph             completely_home_prop             *              *    
##  safegraph             full_time_work_prop              *              *    
##  safegraph             median_home_dwell_time           *              *    
##  safegraph             part_time_work_prop              *              *    
##  usa-facts             confirmed_7dav_cumulative_num    *      *   *   *    
##  usa-facts             confirmed_7dav_cumulative_prop   *      *   *   *    
##  usa-facts             confirmed_7dav_incidence_num     *      *   *   *    
##  usa-facts             confirmed_7dav_incidence_prop    *      *   *   *    
##  usa-facts             confirmed_cumulative_num         *      *   *   *    
##  usa-facts             confirmed_cumulative_prop        *      *   *   *    
##  usa-facts             confirmed_incidence_num          *      *   *   *    
##  usa-facts             confirmed_incidence_prop         *      *   *   *    
##  usa-facts             deaths_7dav_cumulative_num       *      *   *   *    
##  usa-facts             deaths_7dav_cumulative_prop      *      *   *   *    
##  usa-facts             deaths_7dav_incidence_num        *      *   *   *    
##  usa-facts             deaths_7dav_incidence_prop       *      *   *   *    
##  usa-facts             deaths_cumulative_num            *      *   *   *    
##  usa-facts             deaths_cumulative_prop           *      *   *   *    
##  usa-facts             deaths_incidence_num             *      *   *   *    
##  usa-facts             deaths_incidence_prop            *      *   *   *    
##  youtube-survey        raw_cli                                         *    
##  youtube-survey        raw_ili                                         *    
##  youtube-survey        smoothed_cli                                    *    
##  youtube-survey        smoothed_ili                                    *

Access to Survey Microdata

Want to study a problem that can be answered with 10 million US survey responses since April? Possible topics:

Raw response data is freely available to researchers who sign a data use agreement to protect confidentiality of responses

We’re building a network of academic and non-profit researchers to learn from the survey. Join us!